Cell segmentation


Single Tensor Cell Segmentation using Scalar Field Representations

Vargas, Kevin I. Ruiz, Galdino, Gabriel G., Ren, Tsang Ing, Cunha, Alexandre L.

arXiv.org Artificial Intelligence

We investigate image segmentation of cells under the lens of scalar fields. Our goal is to learn a continuous scalar field on image domains such that its segmentation produces robust instances for cells present in images. This field is a function parameterized by the trained network, and its segmentation is realized by the watershed method. The fields we experiment with are solutions to the Poisson partial differential equation and a diffusion mimicking the steady-state solution of the heat equation. These solutions are obtained by minimizing only the field residuals; no regularization is needed, providing a robust regression capable of diminishing the adverse impacts of outliers in the training data and allowing for sharp cell boundaries. A single tensor is all that is needed to train a U-Net, thus simplifying implementation, lowering training and inference times, hence reducing energy consumption, and requiring a small memory footprint, all attractive features in edge computing. We present competitive results on public datasets from the literature and show that our novel, simple yet geometrically insightful approach can achieve excellent cell segmentation results.
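The geometry behind the approach can be illustrated without a network: solve the Poisson equation ∇²u = −1 on a binary mask and the resulting scalar field peaks inside each cell while dipping along the narrow neck between touching cells, which is where a watershed places the separating ridge. A minimal sketch using plain Jacobi iterations; the mask, grid size, and iteration count are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def poisson_field(mask, iters=2000):
    """Jacobi iterations for the discrete Poisson problem
    laplacian(u) = -1 inside the mask, u = 0 outside (grid spacing h = 1).
    A hand-rolled stand-in for the field the paper learns with a U-Net."""
    u = np.zeros(mask.shape)
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, avg + 0.25, 0.0)  # avg + h^2/4 source term
    return u

# Two overlapping disks standing in for touching cells.
yy, xx = np.mgrid[0:40, 0:70]
mask = ((yy - 20)**2 + (xx - 20)**2 < 144) | ((yy - 20)**2 + (xx - 42)**2 < 144)
u = poisson_field(mask)

# The field peaks near each cell centre and dips at the shared neck.
print(u[20, 20], u[20, 31], u[20, 42])
```

Seeding a watershed at the field's local maxima then separates the two lobes; the sketch only computes the field and checks that it dips at the neck.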


Glioma C6: A Novel Dataset for Training and Benchmarking Cell Segmentation

Malashin, Roman, Pashkevich, Svetlana, Ilyukhin, Daniil, Volkov, Arseniy, Yachnaya, Valeria, Denisov, Andrey, Mikhalkova, Maria

arXiv.org Artificial Intelligence

We present Glioma C6, a new open dataset for instance segmentation of glioma C6 cells, designed as both a benchmark and a training resource for deep learning models. The dataset comprises 75 high-resolution phase-contrast microscopy images with over 12,000 annotated cells, providing a realistic testbed for biomedical image analysis. It includes soma annotations and morphological cell categorization provided by biologists. Additional categorization of cells, based on morphology, aims to enhance the utilization of image data for cancer cell research. Glioma C6 consists of two parts: the first is curated with controlled parameters for benchmarking, while the second supports generalization testing under varying conditions. We evaluate the performance of several generalist segmentation models, highlighting their limitations on our dataset. Our experiments demonstrate that training on Glioma C6 significantly enhances segmentation performance, reinforcing its value for developing robust and generalizable models. The dataset is publicly available for researchers.


DM-QPMNET: Dual-modality fusion network for cell segmentation in quantitative phase microscopy

Chakraborty, Rajatsubhra, Espinosa-Momox, Ana, Haskin, Riley, Xu, Depeng, Porras-Aguilar, Rosario

arXiv.org Artificial Intelligence

Cell segmentation in single-shot quantitative phase microscopy (ssQPM) faces challenges from traditional thresholding methods that are sensitive to noise and cell density, while deep learning approaches using simple channel concatenation fail to exploit the complementary nature of polarized intensity images and phase maps. We introduce DM-QPMNet, a dual-encoder network that treats these as distinct modalities with separate encoding streams. Our architecture fuses modality-specific features at intermediate depth via multi-head attention, enabling polarized edge and texture representations to selectively integrate complementary phase information. This content-aware fusion preserves training stability while adding principled multi-modal integration through dual-source skip connections and per-modality normalization at minimal overhead. Our approach demonstrates substantial improvements over monolithic concatenation and single-modality baselines, showing that modality-specific encoding with learnable fusion effectively exploits ssQPM's simultaneous capture of complementary illumination and phase cues for robust cell segmentation.
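Stripped to one head and NumPy, the fusion idea is ordinary scaled dot-product cross-attention: queries come from the polarized-intensity stream, keys and values from the phase stream, so intensity features select which phase features to absorb. The token count and feature width below are arbitrary assumptions for illustration:

```python
import numpy as np

def cross_attention(q_feat, kv_feat):
    """Single-head scaled dot-product cross-attention (a sketch only;
    DM-QPMNet uses multi-head attention at intermediate encoder depth)."""
    d = q_feat.shape[-1]
    scores = q_feat @ kv_feat.T / np.sqrt(d)      # (Nq, Nkv) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # each row sums to 1
    return w @ kv_feat                            # phase info routed to queries

rng = np.random.default_rng(0)
intensity = rng.normal(size=(16, 8))  # flattened intensity-stream tokens
phase = rng.normal(size=(16, 8))      # flattened phase-stream tokens
fused = intensity + cross_attention(intensity, phase)  # residual fusion
```

The residual form keeps the intensity stream intact when the attention weights are uninformative, which is one plausible reading of the "preserves training stability" claim.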


High-Throughput Low-Cost Segmentation of Brightfield Microscopy Live Cell Images

Das, Surajit, Roy, Gourav, Zun, Pavel

arXiv.org Artificial Intelligence

Live cell culture is crucial in biomedical studies for analyzing cell properties and dynamics in vitro. This study focuses on segmenting unstained live cells imaged with bright-field microscopy. While many segmentation approaches exist for microscopic images, none consistently address the challenges of bright-field live-cell imaging at high throughput, where temporal phenotype changes, low contrast, noise, and motion-induced blur from cellular movement remain major obstacles. We developed a low-cost CNN-based pipeline incorporating a comparative analysis of frozen encoders within a unified U-Net architecture enhanced with attention mechanisms, instance-aware systems, adaptive loss functions, hard instance retraining, dynamic learning rates, progressive mechanisms to mitigate overfitting, and an ensemble technique. The model was validated on a public dataset featuring diverse live cell variants, showing consistent competitiveness with state-of-the-art methods, achieving 93% test accuracy and an average F1-score of 89% (std. 0.07) on low-contrast, noisy, and blurry images. Notably, the model was trained primarily on bright-field images with limited exposure to phase-contrast microscopy (<20%), yet it generalized effectively to the phase-contrast LIVECell dataset, demonstrating modality robustness and strong performance. This highlights its potential for real-world laboratory deployment across imaging conditions. The model requires minimal compute power and is adaptable using basic deep learning setups such as Google Colab, making it practical for training on other cell variants. Our pipeline outperforms existing methods in robustness and precision for bright-field microscopy segmentation. The code and dataset are available for reproducibility.
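For reference, the reported F1-score is the harmonic mean of precision and recall, which on binary masks coincides with the Dice coefficient. A small pixel-wise sketch, not the authors' evaluation code:

```python
import numpy as np

def f1_score(pred, gt):
    """Pixel-wise F1 (equivalently Dice) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # predicted cell, truly cell
    fp = np.logical_and(pred, ~gt).sum()  # predicted cell, background
    fn = np.logical_and(~pred, gt).sum()  # missed cell pixels
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

gt = np.array([[1, 1], [0, 0]])
pred = np.array([[1, 0], [1, 0]])
print(f1_score(pred, gt))  # → 0.5 (tp=1, fp=1, fn=1)
```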


IAUNet: Instance-Aware U-Net

Prytula, Yaroslav, Tsiporenko, Illia, Zeynalli, Ali, Fishman, Dmytro

arXiv.org Artificial Intelligence

Instance segmentation is critical in biomedical imaging to accurately distinguish individual objects like cells, which often overlap and vary in size. Recent query-based methods, where object queries guide segmentation, have shown strong performance. While U-Net has been a go-to architecture in medical image segmentation, its potential in query-based approaches remains largely unexplored. In this work, we present IAUNet, a novel query-based U-Net architecture. The core design features a full U-Net architecture, enhanced by a novel lightweight convolutional Pixel decoder, making the model more efficient and reducing the number of parameters. Additionally, we propose a Transformer decoder that refines object-specific features across multiple scales. Finally, we introduce the 2025 Revvity Full Cell Segmentation Dataset, a unique resource with detailed annotations of overlapping cell cytoplasm in brightfield images, setting a new benchmark for biomedical instance segmentation. Experiments on multiple public datasets and our own show that IAUNet outperforms most state-of-the-art fully convolutional, transformer-based, query-based, and cell-segmentation-specific models, setting a strong baseline for cell instance segmentation tasks. Code is available at https://github.com/SlavkoPrytula/IAUNet


A Lightweight and Extensible Cell Segmentation and Classification Model for Whole Slide Images

Shvetsov, Nikita, Kilvaer, Thomas K., Tafavvoghi, Masoud, Sildnes, Anders, Møllersen, Kajsa, Busund, Lill-Tove Rasmussen, Bongo, Lars Ailo

arXiv.org Artificial Intelligence

Developing clinically useful cell-level analysis tools in digital pathology remains challenging due to limitations in dataset granularity, inconsistent annotations, high computational demands, and difficulties integrating new technologies into workflows. To address these issues, we propose a solution that enhances data quality, model performance, and usability by creating a lightweight, extensible cell segmentation and classification model. First, we update data labels through cross-relabeling to refine annotations of PanNuke and MoNuSAC, producing a unified dataset with seven distinct cell types. Second, we leverage the H-Optimus foundation model as a fixed encoder to improve feature representation for simultaneous segmentation and classification tasks. Third, to address foundation models' computational demands, we distill knowledge to reduce model size and complexity while maintaining comparable performance. Finally, we integrate the distilled model into QuPath, a widely used open-source digital pathology platform. Results demonstrate improved segmentation and classification performance using the H-Optimus-based model compared to a CNN-based model. Specifically, average $R^2$ improved from 0.575 to 0.871, and average $PQ$ score improved from 0.450 to 0.492, indicating better alignment with actual cell counts and enhanced segmentation quality. The distilled model maintains comparable performance while reducing parameter count by a factor of 48. By reducing computational complexity and integrating into workflows, this approach may significantly impact diagnostics, reduce pathologist workload, and improve outcomes. Although the method shows promise, extensive validation is necessary prior to clinical deployment.
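The distillation step is typically implemented as a temperature-scaled KL divergence between teacher and student class distributions; the abstract does not give the exact loss, so the Hinton-style objective below is an assumption for illustration:

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 to keep gradient magnitudes comparable across T."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T * T) * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()

rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 7))                 # e.g. 7 cell-type logits
student = teacher + 0.5 * rng.normal(size=(4, 7)) # imperfect student
print(distill_loss(student, teacher))             # > 0; zero iff identical
```

In practice this term is mixed with the ordinary supervised segmentation/classification loss; the mixing weight is a tuning choice the abstract does not specify.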


Sparse Space-Time Deconvolution for Calcium Image Analysis

Ferran Diego Andilla, Fred A. Hamprecht

Neural Information Processing Systems

We describe a unified formulation and algorithm to find an extremely sparse representation for Calcium image sequences in terms of cell locations, cell shapes, spike timings and impulse responses. Solution of a single optimization problem yields cell segmentations and activity estimates that are on par with the state of the art, without the need for heuristic pre- or postprocessing. Experiments on real and synthetic data demonstrate the viability of the proposed method.
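A minimal flavor of the idea, on a single trace rather than the paper's joint space-time problem: model the fluorescence as a convolution of sparse spikes with an exponential impulse response and recover the spikes with an l1-regularized solver. The dictionary, decay constant, and penalty below are illustrative assumptions, and plain ISTA stands in for the paper's solver:

```python
import numpy as np

def ista(y, D, lam=0.02, iters=500):
    """ISTA for min_s 0.5 * ||y - D s||^2 + lam * ||s||_1."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(iters):
        s = s - D.T @ (D @ s - y) / L       # gradient step
        s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)  # soft-threshold
    return s

T = 60
kernel = np.exp(-np.arange(T) / 5.0)        # calcium impulse response
D = np.zeros((T, T))                        # convolution dictionary
for k in range(T):
    D[k:, k] = kernel[:T - k]               # column k = kernel shifted to time k

s_true = np.zeros(T)
s_true[10], s_true[35] = 1.0, 0.8           # two spikes
y = D @ s_true                              # noise-free fluorescence trace
s_hat = ista(y, D)                          # recovered spike train
```

The recovered vector is sparse with its largest entries at the true spike times; the full method additionally factors out cell locations and shapes in the spatial dimensions.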


SELMA3D challenge: Self-supervised learning for 3D light-sheet microscopy image segmentation

Chen, Ying, Al-Maskari, Rami, Horvath, Izabela, Ali, Mayar, Hoher, Luciano, Yang, Kaiyuan, Lin, Zengming, Zhai, Zhiwei, Shen, Mengzhe, Xun, Dejin, Wang, Yi, Xu, Tony, Goubran, Maged, Wu, Yunheng, Mori, Kensaku, Paetzold, Johannes C., Erturk, Ali

arXiv.org Artificial Intelligence

Recent innovations in light sheet microscopy, paired with developments in tissue clearing techniques, enable the 3D imaging of large mammalian tissues with cellular resolution. Combined with the progress in large-scale data analysis, driven by deep learning, these innovations empower researchers to rapidly investigate the morphological and functional properties of diverse biological samples. Segmentation, a crucial preliminary step in the analysis process, can be automated using domain-specific deep learning models with expert-level performance. However, these models exhibit high sensitivity to domain shifts, leading to a significant drop in accuracy when applied to data outside their training distribution. To address this limitation, and inspired by the recent success of self-supervised learning in training generalizable models, we organized the SELMA3D Challenge during the MICCAI 2024 conference. SELMA3D provides a vast collection of light-sheet images from cleared mouse and human brains, comprising 35 large 3D images, each with over 1000^3 voxels, and 315 annotated small patches for fine-tuning, preliminary testing and final testing. The dataset encompasses diverse biological structures, including vessel-like and spot-like structures. Five teams participated in all phases of the challenge, and their proposed methods are reviewed in this paper. Quantitative and qualitative results from most participating teams demonstrate that self-supervised learning on large datasets improves segmentation model performance and generalization. We will continue to support and extend SELMA3D as an inaugural MICCAI challenge focused on self-supervised learning for 3D microscopy image segmentation.


Reviews: OnACID: Online Analysis of Calcium Imaging Data in Real Time

Neural Information Processing Systems

This paper proposes an online framework for analyzing calcium imaging data. This framework is built upon the popular and now widely used constrained non-negative matrix factorization (CNMF) method for cell segmentation and calcium time-series analysis (Pnevmatikakis, et al., 2016). While the existing CNMF approach is now being used by many labs across the country, I've talked to many neuroscientists who complain that this method cannot be applied to large datasets and thus its application has been limited. This work extends this method to a real-time decoding setting, making it an extremely useful contribution for the neuroscience community. The paper is well written and the results are compelling.


Multimodal Alignment of Histopathological Images Using Cell Segmentation and Point Set Matching for Integrative Cancer Analysis

Jiang, Jun, Moore, Raymond, Novotny, Brenna, Liu, Leo, Fogarty, Zachary, Guo, Ray, Svetomir, Markovic, Wang, Chen

arXiv.org Artificial Intelligence

Abstract: Histopathological imaging is vital for cancer research and clinical practice, with multiplexed Immunofluorescence (MxIF) and Hematoxylin and Eosin (H&E) staining providing complementary insights. However, aligning different stains at the cell level remains a challenge due to modality differences. In this paper, we present a novel framework for multimodal image alignment using cell segmentation outcomes. By treating cells as point sets, we apply Coherent Point Drift (CPD) for initial alignment and refine it with Graph Matching (GM). Evaluated on ovarian cancer tissue microarrays (TMAs), our method achieves high alignment accuracy, enabling integration of cell-level features across modalities and generating virtual H&E images from MxIF data for enhanced clinical interpretation.

Keywords: Histopathology alignment, Histopathology registration, Bioimage analysis

Introduction: As an important approach to revealing cell-level details in cancer, histopathological images have been widely used in clinical practice for both diagnostic decision making and treatment follow-up. Following different staining protocols, each histopathology modality has its unique strength in highlighting specific aspects of the tumor immune microenvironment (TIME). Among these, multiplexed Immunofluorescence (MxIF) images provide refined immune cell phenotyping, making them a favorable research tool for revealing cell behaviors in the TIME. However, this imaging technique is currently used mainly for research purposes due to the low reliability of marker signals caused by complex cyclic staining processes. On the other hand, H&E (Hematoxylin and Eosin) staining plays an irreplaceable role in providing standard clinical references by revealing cell morphology and texture patterns.
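Once cells are reduced to centroid point sets, the initial alignment becomes a point-set registration problem. CPD itself jointly estimates correspondences and the transform; the sketch below shows only the simpler rigid (Kabsch) solve for already-matched points, as a hypothetical illustration of the geometry involved:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (Kabsch) mapping matched src -> dst.
    A simplified stand-in for CPD, which also infers the matching."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t

rng = np.random.default_rng(1)
src = rng.normal(size=(30, 2))               # e.g. MxIF cell centroids
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([5.0, -2.0]) # same cells in H&E coordinates
R, t = rigid_align(src, dst)
aligned = src @ R.T + t                      # should coincide with dst
```

With noise-free matched points the rotation and translation are recovered exactly; CPD's contribution in the paper is handling the unknown, noisy correspondences between the two stains before any such solve is possible.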